Proximal extrapolated gradient methods for variational inequalities

Author

  • Yu Malitsky
Abstract

This paper concerns novel first-order methods for monotone variational inequalities. The methods use a very simple linesearch procedure that takes into account local information about the operator; they do not require Lipschitz continuity of the operator, and the linesearch uses only values of the operator. Moreover, when the operator is affine, our linesearch becomes very simple: it needs only basic vector-vector operations. For all our methods we establish an ergodic convergence rate. In addition, we modify one of the proposed methods for the case of composite minimization. Preliminary results from numerical experiments are quite promising.
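To make the kind of linesearch described above concrete, here is a minimal sketch. It is not the paper's method but a classical extragradient scheme (Korpelevich/Khobotov type) whose backtracking test uses only values of the operator and no Lipschitz constant; the box constraint, the affine operator, and all parameter values are illustrative assumptions.

import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi], our illustrative feasible set C.
    return np.clip(x, lo, hi)

def extragradient_linesearch(F, x0, lo, hi, lam0=1.0, nu=0.9, beta=0.5,
                             max_iter=1000, tol=1e-8):
    # Extragradient method with a value-only backtracking linesearch for VI(F, C).
    # NOTE: a classical scheme shown only for illustration, not the algorithm
    # proposed in the paper.
    x = x0.astype(float)
    for k in range(max_iter):
        Fx = F(x)
        lam = lam0
        # Backtrack until  lam * ||F(x) - F(y)|| <= nu * ||x - y||.
        while True:
            y = project_box(x - lam * Fx, lo, hi)
            if np.allclose(x, y, atol=tol):
                return x, k          # fixed point of the projection: (approximately) a solution
            if lam * np.linalg.norm(Fx - F(y)) <= nu * np.linalg.norm(x - y):
                break
            lam *= beta
        x = project_box(x - lam * F(y), lo, hi)
    return x, max_iter

# Toy affine monotone operator F(x) = A x + b on the box [-1, 1]^2 (made-up data).
A = np.array([[0.1, 1.0], [-1.0, 0.1]])
b = np.array([0.5, -0.3])
sol, iters = extragradient_linesearch(lambda x: A @ x + b, np.zeros(2), lo=-1.0, hi=1.0)
print(sol, iters)

When the operator is affine, as in the toy example, F(x) - F(y) equals A(x - y), which hints at why a linesearch can reduce to simple vector-vector operations in that case.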


Similar articles

An inexact alternating direction method with SQP regularization for the structured variational inequalities

In this paper, we propose an inexact alternating direction method with square quadratic proximal (SQP) regularization for the structured variational inequalities. The predictor is obtained by solving the SQP system approximately under a significantly relaxed accuracy criterion, and the new iterate is computed directly by an explicit formula derived from the original SQP method. Under appropriat...


An inexact proximal algorithm for variational inequalities

This paper presents a new inexact proximal method for solving monotone variational inequality problems with a given separable structure. The resulting method combines the recent proximal distances theory introduced by Auslender and Teboulle (2006) with a decomposition method given by Chen and Teboulle that was proposed to solve convex optimization problems. This method extends and generalizes p...


Decomposition Techniques for Bilinear Saddle Point Problems and Variational Inequalities with Affine Monotone Operators

The majority of first-order methods for large-scale convex-concave saddle point problems and variational inequalities with monotone operators are proximal algorithms, which at every iteration need to minimize over the problem's domain X the sum of a linear form and a strongly convex function. To make such an algorithm practical, X should be proximal-friendly, i.e., admit a strongly convex function with e...
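As a rough illustration of what "proximal-friendly" means here (not code from that paper): on the probability simplex, with the entropy as the strongly convex distance-generating function, minimizing a linear form plus that function has a closed-form softmax solution.

import numpy as np

def simplex_entropy_prox(a):
    # argmin over the probability simplex of  <a, x> + sum_i x_i * log(x_i).
    # The optimality conditions give x_i proportional to exp(-a_i), i.e. a
    # softmax; the max-shift below is only for numerical stability.
    z = -a - np.max(-a)
    w = np.exp(z)
    return w / w.sum()

# 'a' stands for a hypothetical linear form (e.g. an operator value at the
# current iterate); the output lies on the simplex.
print(simplex_entropy_prox(np.array([0.3, -1.2, 0.7])))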


Convergence rate analysis of iterative algorithms for solving variational inequality problems

We present a unified convergence rate analysis of iterative methods for solving the variational inequality problem. Our results are based on certain error bounds; they subsume and extend the linear and sublinear rates of convergence established in several previous studies. We also derive a new error bound for γ-strictly monotone variational inequalities. The class of algorithms covered by our ...




Journal title:

Volume 33, Issue:

Pages: -

Publication date: 2018